14 research outputs found

    Advanced Underwater Image Restoration in Complex Illumination Conditions

    Full text link
    Underwater image restoration has been a challenging problem for decades, since the advent of underwater photography. Most solutions focus on shallow water scenarios, where the scene is uniformly illuminated by sunlight. However, the vast majority of uncharted underwater terrain lies beyond 200 meters depth, where natural light is scarce and artificial illumination is needed. In such cases, light sources co-moving with the camera dynamically change the scene appearance, which makes shallow water restoration methods inadequate. In particular for multi-light-source systems (nowadays composed of dozens of LEDs), calibrating each light is time-consuming, error-prone and tedious; moreover, we observe that only the integrated illumination within the viewing volume of the camera is critical, rather than the individual light sources. The key idea of this paper is therefore to exploit the appearance changes of objects or the seafloor as they traverse the viewing frustum of the camera. Through new constraints assuming Lambertian surfaces, corresponding image pixels constrain the light field in front of the camera. For each voxel, a signal factor and a backscatter value are stored in a volumetric grid that enables very efficient image restoration for camera-light platforms, which facilitates consistently texturing large 3D models and maps that would otherwise be dominated by lighting and medium artifacts. To validate the effectiveness of our approach, we conducted extensive experiments on simulated and real-world datasets. The results demonstrate the robustness of our approach in restoring the true albedo of objects while mitigating the influence of lighting and medium effects. Furthermore, we demonstrate that our approach can be readily extended to other scenarios, including in-air imaging with artificial illumination and other similar cases.
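    The grid-based restoration the abstract describes can be sketched with a simple linear image formation model. The array layout, function names, and the one-voxel-per-pixel lookup below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def restore_albedo(image, voxel_idx, signal_grid, backscatter_grid, eps=1e-6):
    """Invert I = albedo * signal + backscatter per pixel, using the voxel
    each pixel's scene point falls into (here: a flat 1D grid for brevity)."""
    s = signal_grid[voxel_idx]       # integrated illumination (signal) factor
    b = backscatter_grid[voxel_idx]  # accumulated backscatter value
    return (image - b) / np.maximum(s, eps)

# toy example: two pixels mapping to two different voxels
img = np.array([[0.6, 0.35]])
idx = np.array([[0, 1]])
sig = np.array([0.5, 0.25])
bs = np.array([0.1, 0.1])
print(restore_albedo(img, idx, sig, bs))  # both pixels recover albedo 1.0
```

    Once the grid is filled, restoration is a pure per-pixel lookup and division, which is what makes texturing large models efficient.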

    Refractive Geometry for Underwater Domes

    Get PDF
    Underwater cameras are typically placed behind glass windows to protect them from the water. Spherical glass, a dome port, is well suited for high water pressures at great depth, allows for a large field of view, and avoids refraction if a pinhole camera is positioned exactly at the sphere’s center. Adjusting a real lens perfectly to the dome center is a challenging task, in terms of how to guide the centering process (e.g. visual servoing), how to measure the alignment quality, and how to mechanically perform the alignment. Consequently, such systems are prone to being decentered by some offset, leading to challenging refraction patterns at the sphere that invalidate the pinhole camera model. We show that the overall camera system becomes an axial camera, even for thick domes as used for deep sea exploration, and provide a non-iterative way to compute the center of refraction without requiring knowledge of exact air, glass or water properties. We also analyze the refractive geometry at the sphere, looking at effects such as forward vs. backward decentering and iso-refraction curves, and obtain a 6th-degree polynomial equation for forward projection of 3D points in thin domes. We then propose a pure underwater calibration procedure to estimate the decentering from multiple images. This estimate can either be used during adjustment to guide the mechanical positioning of the lens, or be considered in photogrammetric underwater applications.
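    The analysis rests on Snell's law applied at the spherical interface. A minimal sketch of a single refraction event (a thin-dome simplification, with assumed refractive indices) looks like this:

```python
import numpy as np

def refract(d, n, ior_in, ior_out):
    """Refract unit direction d at a surface with unit normal n
    (n points back into the incident medium)."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    r = ior_in / ior_out
    cos_i = -np.dot(n, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n

# a ray hitting the interface head-on is undeviated regardless of indices
straight = refract(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]), 1.0, 1.33)
print(straight)  # [0. 0. 1.]
```

    A ray that meets the glass along the surface normal (as every ray through the center of a perfectly centered dome does) passes undeviated; decentering breaks exactly this property, which is why the system degenerates to an axial camera.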

    Virtually throwing benchmarks into the ocean for deep sea photogrammetry and image processing evaluation

    Get PDF
    Vision in the deep sea is acquiring increasing interest from many fields, as the deep seafloor represents the largest surface portion on Earth. Unlike common shallow underwater imaging, deep sea imaging requires artificial lighting to illuminate the scene in perpetual darkness. Deep sea images suffer from degradation caused by scattering, attenuation and effects of artificial light sources, and have a very different appearance to images in shallow water or on land. This impairs transferring current vision methods to deep sea applications. Development of adequate algorithms requires data with ground truth in order to evaluate the methods. However, it is practically impossible to capture the same deep sea scene both with and without water or artificial lighting effects. This situation impairs progress in deep sea vision research, where synthesized images with ground truth could be a good solution. Most current methods either render a virtual 3D model, or use atmospheric image formation models to convert real-world scenes to a shallow water appearance illuminated by sunlight. Currently, there is a lack of image datasets dedicated to deep sea vision evaluation. This paper introduces a pipeline to synthesize deep sea images using existing real-world RGB-D benchmarks, and exemplarily generates deep sea twin datasets for the well-known Middlebury stereo benchmarks. They can be used both for testing underwater stereo matching methods and for training and evaluating underwater image processing algorithms. This work aims towards establishing an image benchmark intended particularly for deep sea vision developments.
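    The kind of physics-based conversion such a pipeline performs can be sketched with a simplified attenuation-plus-backscatter model applied to an RGB-D image. The coefficients and the uniform backscatter veil are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def to_underwater(rgb, depth,
                  beta=np.array([0.8, 0.3, 0.2]),   # per-channel attenuation, 1/m
                  veil=np.array([0.05, 0.2, 0.3])): # backscatter veil color
    """rgb: HxWx3 in [0,1]; depth: HxW ranges in meters.
    Attenuate the direct signal and blend in backscatter with range."""
    t = np.exp(-beta * depth[..., None])  # per-channel transmission
    return rgb * t + veil * (1.0 - t)     # direct signal + backscatter

rgb = np.ones((1, 1, 3))
shallow = to_underwater(rgb, np.zeros((1, 1)))
deep = to_underwater(rgb, np.full((1, 1), 10.0))
print(shallow, deep)  # zero range leaves the image unchanged; red fades fastest
```

    Because the model only needs per-pixel range, any RGB-D benchmark can be converted this way, which is what enables the "deep sea twin" datasets.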

    Deep Sea Robotic Imaging Simulator

    Get PDF
    Nowadays, underwater vision systems are widely applied in ocean research. However, the largest portion of the ocean - the deep sea - still remains mostly unexplored. Only relatively few image sets have been taken in the deep sea due to the physical limitations caused by technical challenges and enormous costs. Deep sea images are very different from images taken in shallow waters, and this area has not received much attention from the community. The shortage of deep sea images and corresponding ground truth data for evaluation and training is becoming a bottleneck for the development of underwater computer vision methods. Thus, this paper presents a physical model-based image simulation solution, which uses in-air texture and depth information as inputs, to generate underwater image sequences taken by robots in deep ocean scenarios. Different from shallow water conditions, artificial illumination plays a vital role in deep sea image formation as it strongly affects the scene appearance. Our radiometric image formation model considers both attenuation and scattering effects with co-moving spotlights in the dark. Based on a detailed analysis and evaluation of the underwater image formation model, we propose a 3D lookup table structure in combination with a novel rendering strategy to improve simulation performance. This enables us to integrate an interactive deep sea robotic vision simulation into the Unmanned Underwater Vehicles simulator. To inspire further deep sea vision research by the community, we release the source code of our deep sea image converter to the public (https://www.geomar.de/en/omv-research/robotic-imaging-simulator).
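    The lookup-table idea can be sketched as follows: an expensive per-point lighting/scattering term is precomputed once on a coarse 3D grid, and render-time evaluation becomes a cheap cell lookup. The grid resolution and the placeholder falloff function are assumptions for illustration, not the paper's actual table layout:

```python
import numpy as np

def falloff(x, y, z):
    # placeholder inverse-square-style falloff from a light at the origin;
    # stands in for the real attenuation/scattering integral
    return 1.0 / (1.0 + x**2 + y**2 + z**2)

def build_lut(fn, lo, hi, res):
    """Sample fn on a res x res x res grid spanning [lo, hi] per axis."""
    axes = [np.linspace(lo[i], hi[i], res) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    return fn(X, Y, Z), axes

def lookup(lut, axes, p):
    """Nearest-cell lookup; a real renderer would interpolate trilinearly."""
    idx = tuple(np.abs(ax - c).argmin() for ax, c in zip(axes, p))
    return lut[idx]

lut, axes = build_lut(falloff, np.zeros(3), np.ones(3) * 2.0, 33)
print(lookup(lut, axes, (0.0, 0.0, 0.0)))  # 1.0 at the light position
```

    The table is built once per light configuration, so the per-frame cost no longer depends on how expensive the lighting term itself is.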

    Optical Imaging and Image Restoration Techniques for Deep Ocean Mapping: A Comprehensive Survey

    Get PDF
    Visual systems are receiving increasing attention in underwater applications. While the photogrammetric and computer vision literature has so far largely targeted shallow water applications, deep sea mapping research has recently come into focus. The majority of the seafloor, and of Earth’s surface, is located in the deep ocean below 200 m depth and is still largely uncharted. Here, on top of the general image quality degradation caused by water absorption and scattering, additional artificial illumination of the survey areas, which otherwise reside in permanent darkness as no sunlight reaches such depths, is mandatory. This creates unintended non-uniform lighting patterns in the images and non-isotropic scattering effects close to the camera. If not compensated properly, such effects dominate seafloor mosaics and can obscure the actual seafloor structures. Moreover, cameras must be protected from the high water pressure, e.g. by housings with thick glass ports, which can lead to refractive distortions in images. Additionally, no satellite navigation is available to support localization. All these issues render deep sea visual mapping a challenging task, and most of the methods and strategies developed so far cannot be directly transferred to the seafloor at several kilometers depth. In this survey we provide a state-of-the-art review of deep ocean mapping, starting from existing systems and challenges, then discussing shallow and deep water models and corresponding solutions. Finally, we identify open issues for future lines of research.

    An Optical Digital Twin for Underwater Photogrammetry: GEODT - A Geometrically Verified Optical Digital Twin for Development, Evaluation, Training, Testing and Tuning of Multi-Media Refractive Algorithms

    Get PDF
    Most parts of the Earth’s surface are situated in the deep ocean. To explore this visually rather adversarial environment, cameras have to be protected by pressure housings. These housings, in turn, need interfaces to the world that endure the extreme pressures within the water column. Commonly, a flat window or a half-sphere of glass, called a flat port or dome port, respectively, is used to implement such an interface. This introduces multi-media interfaces between water, glass and air, entailing refraction effects in the images taken through them. To obtain unbiased 3D measurements and a geometrically faithful reconstruction of the scene, it is mandatory to deal with these effects in a proper manner. Hence, we propose an optical digital twin of an underwater environment, which has been geometrically verified to resemble a real water lab tank featuring the two most common optical interfaces. It can be used to develop, evaluate, train, test and tune refractive algorithms. Alongside this paper, we publish the model for further extension, jointly with code to dynamically generate samples from the dataset. Finally, we also publish a pre-rendered dataset ready for use at https://git.geomar.de/david-nakath/geodt.

    Quantitatively Monitoring Bubble-Flow at a Seep Site Offshore Oregon: Field Trials and Methodological Advances for Parallel Optical and Hydroacoustical Measurements

    Get PDF
    Two lander-based devices, the Bubble-Box and GasQuant-II, were used to investigate the spatial and temporal variability and total gas flow rates of a seep area offshore Oregon, United States. The Bubble-Box is a stereo camera-equipped lander that records bubbles inside a rising corridor at 80 Hz, allowing for automated image analyses of bubble size distributions and rising speeds. GasQuant is a hydroacoustic lander using a horizontally oriented multibeam swath to record the backscatter intensity of bubble streams passing the swath plane. The experimental setup at the Astoria Canyon site, at a water depth of about 500 m, aimed at calibrating the hydroacoustic GasQuant data with the visual Bubble-Box data for a spatial and temporal flow rate quantification of the site. For about 90 h in total, both systems were deployed simultaneously, and pressure and temperature data were recorded using a CTD as well. Detailed image analyses show a Gaussian-like size distribution of bubbles with radii of 0.6–6 mm (mean 2.5 mm, std. dev. 0.25 mm); this is very similar to other measurements reported in the literature. Rising speeds ranged from 15 to 37 cm/s for bubble sizes between 1 and 5 mm and are thus, in parts, slightly faster than reported elsewhere. Bubble sizes and calculated flow rates are rather constant over time at the two monitored bubble streams. Flow rates of these individual bubble streams are in the range of 544–1,278 mm³/s. One Bubble-Box data set was used to calibrate the acoustic backscatter response of the GasQuant data, enabling us to calculate a flow rate for the ensonified seep area (∼1,700 m²) that ranged from 4.98 to 8.33 L/min (5.38 × 10⁶ to 9.01 × 10⁶ mol CH₄/year). Such flow rates are common for seep areas of similar size, and as such, this location is classified as a normally active seep area. To derive these acoustically based flow rates, the data pre-processing considered echogram gridding methods for the swath data and the bubble response at the respective water depth. The described method uses the inverse gas flow quantification approach and gives an in-depth example of the benefits of using acoustic and optical methods in tandem.
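    Converting optically measured bubble radii into a volumetric flow rate amounts to summing spherical bubble volumes per unit time; the counting-window logic below is an illustrative assumption, not the Bubble-Box processing chain:

```python
import numpy as np

def flow_rate_mm3_per_s(radii_mm, window_s):
    """Sum of spherical bubble volumes observed in the corridor,
    divided by the observation window, in mm^3/s."""
    volumes = (4.0 / 3.0) * np.pi * np.asarray(radii_mm, dtype=float) ** 3
    return volumes.sum() / window_s

# e.g. ten mean-sized 2.5 mm bubbles observed over one second
rate = flow_rate_mm3_per_s([2.5] * 10, window_s=1.0)
print(round(rate, 1))  # 654.5 mm^3/s, inside the reported 544-1,278 mm^3/s range
```

    Because volume scales with the cube of the radius, accurate radius estimation dominates the error budget, which is why the stereo setup and its calibration matter so much.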

    Considering Spherical Refraction in Visual Ocean Gas Release Quantification

    No full text
    Compared to traditional gas flow quantification methods, a stereo vision system offers several advantages. However, underwater vision systems usually suffer from light refraction, which can degrade the accuracy of image-based measurements. Cameras centered in spherical glass housings, dome ports, can theoretically avoid refraction, but misalignments in the dome create even more complex refraction effects than cameras behind flat glass windows. This paper introduces the spherical refraction model into a stereo vision gas flow quantification system. It also contributes improvements to an existing bubble quantification workflow for bubble size histograms and bubble volume estimation. First, the spherical glass dome port and the light propagation through it are modeled, and the camera system is calibrated via underwater/in-air image pairs. Afterwards, the epipolar geometry constraint is used to optimize the bubble matching. For volume estimation, an ellipsoid triangulation method is employed to improve ellipsoidal volume estimation. Calibration and control experiments show that the stereo vision gas flow quantification system can measure the volume of released gas accurately, which satisfies the requirements of long-term gas release monitoring in marine science.
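    The ellipsoidal volume step can be illustrated with the closed-form ellipsoid volume. The rotational-symmetry simplification (minor semi-axis c = b) is an assumption here for rising bubbles, not necessarily the paper's exact parameterization:

```python
import numpy as np

def ellipsoid_volume(a, b, c=None):
    """Volume of an ellipsoid with semi-axes a, b, c (same length unit);
    if c is omitted, assume rotational symmetry about the minor axis (c = b)."""
    c = b if c is None else c
    return (4.0 / 3.0) * np.pi * a * b * c

# a flattened bubble with semi-axes 3 x 2 x 2 mm
v = ellipsoid_volume(3.0, 2.0)
print(round(v, 2))  # 50.27 mm^3
```

    Compared to a sphere-of-equivalent-radius approximation, fitting semi-axes from the two stereo views captures the oblate shape of rising bubbles and thus gives a less biased volume.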